Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher.
Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- Free, publicly-accessible full text available September 8, 2026
- Ibaraki, S.; Chen, X. (Eds.) A spring in parallel with an effort source (e.g., an electric motor or human muscle) can reduce the source's energy consumption and effort (i.e., torque or force), depending on the spring stiffness, spring preload, and actuation task. However, selecting a spring stiffness and preload that guarantees effort or energy reduction for an arbitrary set of tasks is a design challenge. This work formulates a convex optimization problem that guarantees a parallel spring reduces the root-mean-square source effort or the energy consumption for multiple tasks. Specifically, we guarantee the benefits across multiple tasks by enforcing a set of convex quadratic constraints on our optimization variables, the parallel spring stiffness and preload. These quadratic constraints are equivalent to ellipses in the stiffness-preload plane; any combination of stiffness and preload inside an ellipse represents a parallel spring that reduces source effort or energy consumption relative to an actuator without a spring. This geometric interpretation intuitively guides the stiffness and preload selection process. We prove analytically, and verify experimentally, that the source effort and energy consumption are convex quadratic functions of the spring stiffness and preload. As applications, we analyze the stiffness and preload selection of a parallel spring for a knee exoskeleton that uses human muscle as the effort source and for a prosthetic ankle powered by electric motors. The source code associated with our framework is available as supplemental open-source software. (A hedged optimization sketch illustrating this formulation appears after this list.) Free, publicly-accessible full text available July 23, 2026
- Large language models (LLMs) have demonstrated revolutionary capabilities in understanding complex contexts and performing a wide range of tasks. However, LLMs can also answer questions that are unethical or harmful, raising concerns about their applications. To regulate LLMs' responses to such questions, a training strategy called alignment can help. Yet, alignment can be unexpectedly compromised when fine-tuning an LLM for downstream tasks. This paper focuses on recovering the alignment lost during fine-tuning. We observe that there are two distinct directions inherent in an aligned LLM: the aligned direction and the harmful direction. An LLM is inclined to answer questions in the aligned direction while refusing queries in the harmful direction. Therefore, we propose to recover the harmful direction of the fine-tuned model that has been compromised. Specifically, we restore a small subset of the fine-tuned model's weight parameters from the original aligned model using gradient descent. We also introduce a rollback mechanism to avoid overly aggressive recovery and maintain downstream task performance. Our evaluation on 125 fine-tuned LLMs demonstrates that our method reduces their harmful rate (the percentage of harmful questions answered) from 33.25% to 1.74% without sacrificing much task performance. In contrast, existing methods either reduce the harmful rate only to a limited extent or significantly impact normal functionality. Our code is available at https://github.com/kangyangWHU/LLMAlignment. (A hedged sketch of the selective weight-restoration idea appears after this list.) Free, publicly-accessible full text available May 12, 2026
- Free, publicly-accessible full text available May 6, 2026
- Free, publicly-accessible full text available January 1, 2026
- Free, publicly-accessible full text available January 6, 2026
- Free, publicly-accessible full text available March 1, 2026
- We propose a generic compiler that converts any zero-knowledge (ZK) proof for SIMD circuits into a proof for general circuits efficiently, together with an extension that preserves the space complexity of the proof systems. Our compiler immediately produces new results improving upon the state of the art. By plugging our compiler into Antman, an interactive sublinear-communication protocol, we improve the overall communication complexity for general circuits from $\mathcal{O}(C^{3/4})$ to $\mathcal{O}(C^{1/2})$. Our implementation shows that for a circuit of size $2^{27}$ it achieves up to an $83.6\times$ improvement in communication compared to the state-of-the-art implementation, and its end-to-end running time is at least $70\%$ faster over a 10 Mbps network. Using recent results on compressed $\varSigma$-protocol theory, we obtain a discrete-log-based constant-round zero-knowledge argument with $\mathcal{O}(C^{1/2})$ communication and common random string length, improving over the state of the art, which has a linear-size common random string and requires heavier computation. We also improve the communication of a designated $n$-verifier zero-knowledge proof from $\mathcal{O}(nC/B + n^2B^2)$ to $\mathcal{O}(nC/B + n^2)$. To demonstrate the scalability of our compilers, we extract a commit-and-prove SIMD ZK from Ligero and cast it in our framework, and we give one instantiation derived from LegoSNARK, demonstrating that the idea of CP-SNARK also fits our methodology. (A short back-of-the-envelope check of the communication improvement appears after this list.) Free, publicly-accessible full text available January 1, 2026
- Free, publicly-accessible full text available February 1, 2026
- This article presents a novel system, LLDPC, which brings Low-Density Parity-Check (LDPC) codes into Long Range (LoRa) networks to improve Forward Error Correction, a task currently managed by less efficient Hamming codes. Three challenges in achieving this are addressed. First, the Chirp Spread Spectrum (CSS) modulation used by LoRa produces only hard demodulation outcomes, whereas LDPC decoding requires Log-Likelihood Ratios (LLRs) for each bit; we solve this by developing a CSS-specific LLR extractor. Second, we improve LDPC decoding efficiency by using symbol-level information to fine-tune the LLRs of error-prone bits. Finally, to minimize the decoding latency caused by the computationally heavy Soft Belief Propagation (SBP) algorithm typically used in LDPC decoding, we apply graph neural networks to accelerate the process. Our results show that LLDPC extends default LoRa's lifetime by 86.7% and reduces SBP decoding latency by 58.09×. (A hedged sketch of per-bit LLR extraction appears after this list.)
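The following is a minimal sketch of the multi-task spring-selection formulation described in the parallel-spring entry above, not the authors' released code. The synthetic task trajectories, the linear spring model (spring torque = k·θ + p), and the use of CVXPY are assumptions made here for illustration.

```python
# Hedged sketch: pick a parallel-spring stiffness k and preload p that minimize the
# total mean-square source torque across tasks, while guaranteeing that every task
# is no worse off than with no spring at all. Task data below is synthetic.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
tasks = []  # each task: sampled joint angle theta(t) [rad] and required torque tau(t) [N*m]
for phase in (0.3, 1.1):
    t = np.linspace(0.0, 1.0, 200)
    theta = 0.5 * np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
    tau = 8.0 * np.sin(2 * np.pi * t + phase) + 0.5 * rng.standard_normal(t.size)
    tasks.append((theta, tau))

k = cp.Variable()  # spring stiffness [N*m/rad]
p = cp.Variable()  # spring preload torque [N*m]

costs, constraints = [], []
for theta, tau in tasks:
    residual = tau - (k * theta + p)                          # torque the source must still supply
    cost_with_spring = cp.sum_squares(residual) / theta.size  # mean-square source effort
    cost_no_spring = float(np.mean(tau ** 2))                 # baseline without a spring
    costs.append(cost_with_spring)
    # Convex quadratic constraint; its feasible set is an ellipse in the (k, p) plane.
    constraints.append(cost_with_spring <= cost_no_spring)

problem = cp.Problem(cp.Minimize(sum(costs)), constraints)
problem.solve()
print(f"stiffness k* = {k.value:.2f} N*m/rad, preload p* = {p.value:.2f} N*m")
```

Minimizing the mean-square effort is equivalent to minimizing the RMS effort mentioned in the abstract, since the square root is monotone; the per-task constraints are what guarantee that the chosen spring never hurts any individual task.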
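For the alignment-recovery entry above, here is a deliberately simplified sketch of restoring a small subset of fine-tuned weights toward the original aligned model with a rollback check. It is not the paper's implementation: it substitutes a direct interpolation step for the paper's gradient-descent restoration, and the `harmful_rate` and `task_accuracy` evaluation callbacks are hypothetical placeholders.

```python
# Hedged sketch, not the paper's code: nudge the most-drifted weights of a fine-tuned
# model back toward the original aligned model, rolling back steps that cost too much
# downstream accuracy. Weights are plain numpy arrays keyed by parameter name.
import copy
import numpy as np

def recover_alignment(aligned, finetuned, harmful_rate, task_accuracy,
                      subset_frac=0.01, step=0.2, max_acc_drop=0.01, iters=10):
    """aligned / finetuned: dicts of parameter name -> numpy array (same shapes)."""
    current = copy.deepcopy(finetuned)
    baseline_acc = task_accuracy(current)

    # Select the small subset of parameters that moved the most during fine-tuning,
    # a crude stand-in for locating the compromised direction.
    drift = {name: float(np.abs(finetuned[name] - aligned[name]).mean()) for name in finetuned}
    k = max(1, int(subset_frac * len(drift)))
    subset = sorted(drift, key=drift.get, reverse=True)[:k]

    for _ in range(iters):
        candidate = copy.deepcopy(current)
        for name in subset:
            # Move the selected weights a small step back toward the aligned values.
            candidate[name] += step * (aligned[name] - candidate[name])

        if task_accuracy(candidate) >= baseline_acc - max_acc_drop:
            current = candidate           # keep the step
        else:
            step *= 0.5                   # rollback: discard it and be less aggressive

        if harmful_rate(current) < 0.02:  # stop once harmful questions are mostly refused
            break
    return current
```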
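As a rough sanity check on the zero-knowledge compiler entry above (a back-of-the-envelope calculation that ignores constants and lower-order terms, not the paper's analysis), the asymptotic communication ratio between the prior $\mathcal{O}(C^{3/4})$ protocol and the improved $\mathcal{O}(C^{1/2})$ one is $C^{1/4}$:

```latex
% Constants and lower-order terms ignored (an assumption for illustration).
\[
  \frac{C^{3/4}}{C^{1/2}} = C^{1/4},
  \qquad
  C = 2^{27} \;\Longrightarrow\; C^{1/4} = 2^{27/4} = 2^{6.75} \approx 108,
\]
```

which is the same order of magnitude as the $83.6\times$ improvement reported for a circuit of size $2^{27}$.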
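Finally, for the LLDPC entry above, the sketch below shows one common way to turn per-symbol soft demodulation scores into per-bit LLRs via the max-log approximation. The plain binary symbol-to-bit labeling, the noise-variance scaling, and the function name are assumptions of this sketch, not details taken from the article (which builds a CSS-specific extractor).

```python
# Hedged sketch, not the LLDPC implementation: max-log per-bit LLRs from a vector of
# soft CSS demodulation scores, where scores[s] grows with the likelihood that
# symbol s was transmitted and the spreading factor SF satisfies len(scores) == 2**SF.
import numpy as np

def bit_llrs_from_symbol_scores(scores, noise_var=1.0):
    scores = np.asarray(scores, dtype=float)
    sf = int(np.log2(scores.size))           # spreading factor, i.e., bits per symbol
    symbols = np.arange(scores.size)
    llrs = np.empty(sf)
    for b in range(sf):
        bit_is_one = ((symbols >> b) & 1) == 1
        # Compare the best symbol hypothesis with bit b = 0 against the best with bit b = 1.
        best0 = scores[~bit_is_one].max()
        best1 = scores[bit_is_one].max()
        llrs[b] = (best0 - best1) / noise_var   # positive LLR favors bit b = 0
    return llrs

# Example with spreading factor 7: 2**7 = 128 candidate symbols, 7 LLRs per symbol.
rng = np.random.default_rng(1)
scores = rng.normal(size=128)
scores[42] += 5.0                             # pretend symbol 42 was transmitted
print(bit_llrs_from_symbol_scores(scores))
```

Soft LLRs of this kind are what a belief-propagation LDPC decoder consumes in place of the hard decisions produced by a default CSS demodulator.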